We constructively demonstrate, through rigorous mathematical arguments, that GNNs are architecturally superior to NNs for approximating bandlimited functions on compact $d$-dimensional Euclidean grids. We show that the former require only $\mathcal{M}$ sampled function values to achieve a uniform approximation error of $O_{d}(2^{-\mathcal{M}^{1/d}})$, and that this error rate is optimal in the sense that NNs may do worse.
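To make the stated rate concrete, the short sketch below evaluates the bound shape $2^{-\mathcal{M}^{1/d}}$ for a few sample counts and dimensions. This only illustrates how the bound scales with $\mathcal{M}$ and $d$; it is not the GNN construction from the paper.

```python
# Numeric illustration of the uniform error rate O_d(2^(-M^(1/d))).
# Illustrative only: shows the scaling of the bound, not the construction.

def error_bound(m: int, d: int) -> float:
    """Shape of the stated bound, 2^(-m^(1/d)), for m samples in d dimensions."""
    return 2.0 ** (-(m ** (1.0 / d)))

for d in (1, 2, 4):
    bounds = [error_bound(m, d) for m in (16, 256, 4096)]
    print(d, [f"{b:.3e}" for b in bounds])
```

Note how, for a fixed sample budget, the exponent $\mathcal{M}^{1/d}$ shrinks as $d$ grows, so the guaranteed error decays much more slowly in higher dimensions.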
translated by 谷歌翻译
We introduce Net2Brain, a graphical and command-line user interface toolbox for comparing the representational spaces of artificial deep neural networks (DNNs) and human brain recordings. While different toolboxes facilitate only single functionalities or focus on a small subset of supervised image-classification models, Net2Brain allows the extraction of activations from more than 600 trained DNNs that perform a diverse range of vision-related tasks (e.g., semantic segmentation, depth estimation, action recognition, etc.), over both image and video datasets. The toolbox computes representational dissimilarity matrices (RDMs) over these activations and compares them to brain recordings using representational similarity analysis (RSA) and weighted RSA, both within specific ROIs and with searchlight analysis. In addition, new datasets of stimuli and brain recordings can be added to the toolbox for evaluation. We demonstrate the functionality and advantages of Net2Brain with an example showing how it can be used to test hypotheses of cognitive computational neuroscience.
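The RDM/RSA pipeline the abstract describes can be sketched in a few lines of NumPy. This is a minimal sketch of the general technique, not Net2Brain's actual API: an RDM holds pairwise dissimilarities between activation patterns across stimuli, and first-order RSA compares two RDMs by rank correlation of their upper triangles. The shapes and the synthetic "brain" data below are illustrative assumptions.

```python
import numpy as np

def rdm(activations: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns (rows) of every pair of stimuli."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """First-order RSA: Spearman correlation of the RDM upper triangles
    (Pearson correlation computed on the ranks; assumes no ties)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1])

rng = np.random.default_rng(0)
dnn_acts = rng.normal(size=(10, 100))                      # 10 stimuli x 100 units
brain_acts = dnn_acts + 0.5 * rng.normal(size=(10, 100))   # noisy stand-in "recordings"
score = rsa_score(rdm(dnn_acts), rdm(brain_acts))
print(f"RSA score: {score:.3f}")
```

A searchlight analysis simply repeats this comparison for the voxels inside a sphere moved across the brain volume.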
Flexible endoscopes for colonoscopy present several limitations due to their inherent complexity, resulting in patient discomfort and a lack of intuitiveness for clinicians. Robotic devices together with autonomous control represent a viable solution to reduce the workload of endoscopists and the training time, while improving the overall procedure outcome. Prior work on autonomous endoscope control uses heuristic policies that limit generalization to the unstructured and highly deformable colon environment and require frequent human intervention. This work proposes an image-based control of the endoscope using deep reinforcement learning, called Deep Visuomotor Control (DVC), to exhibit adaptive behavior in convoluted sections of the colon tract. DVC learns a mapping between endoscopic images and the control signals of the endoscope. A first user study with 20 expert gastrointestinal endoscopists was carried out to compare their navigation performance with DVC policies using a realistic virtual simulator. The results indicate that DVC shows equivalent performance on several assessment parameters while being safer. Moreover, a second user study with 20 novice participants was performed to demonstrate easier human supervision compared to a state-of-the-art heuristic control policy. Seamless supervision of colonoscopy procedures would enable interventionists to focus on medical decisions rather than on the control problem of the endoscope.
Visual action planning particularly excels in applications where the state of the system cannot be computed explicitly, such as manipulation of deformable objects, as it enables planning directly from raw images. Even though deep-learning-based methods have significantly accelerated progress in this field, a crucial requirement for their success is the availability of large amounts of data. In this work, we propose a paradigm, named ACE, to enable visual action planning in cases of data scarcity. We build on the Latent Space Roadmap (LSR) framework, which performs planning over a graph built in a low-dimensional latent space. In particular, ACE is used to i) augment the available training dataset by automatically creating new data points, and ii) create new, previously unobserved connections between state representations in the latent graph. We validate the proposed method on simulated box-stacking and real-world folding tasks, showing its applicability to rigid and deformable object manipulation tasks, respectively.
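The planning step in an LSR-style framework reduces to graph search once latent states and transitions are in place; the "connect" idea above amounts to adding edges that were never observed in the training data. The toy sketch below illustrates this with plain breadth-first search; node ids stand in for latent-state cluster centroids, and all graphs and values are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def plan(graph: dict, start, goal):
    """Breadth-first search for a shortest path start -> goal in a
    directed graph given as {node: [successor, ...]}."""
    queue, parents = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:                      # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None                               # goal unreachable

observed = {0: [1], 1: [2], 2: [], 3: [4], 4: []}
print(plan(observed, 0, 4))   # unreachable with only observed transitions
observed[2] = [3]             # an ACE-style "connect" edge joins the components
print(plan(observed, 0, 4))   # now a plan exists
```

Without the added edge the two components of the latent graph are disconnected and no plan exists; a single new connection makes the goal reachable.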
Photorealistic rendering and re-posing of humans is important for enabling augmented reality experiences. We propose a novel framework to reconstruct the human and the scene so that they can be rendered with novel human poses and views from a single in-the-wild video. Given a video captured by a moving camera, we train two NeRF models: a human NeRF model and a scene NeRF model. To train these models, we rely on existing methods to estimate the rough geometry of the human and the scene. These rough geometry estimates allow us to create a warping field from the observation space to a canonical pose-independent space. From 10-second video clips, our method produces high-quality renderings of the human in novel poses and from novel viewpoints, along with the background.
To investigate the indeterminate termination behavior of a multi-threaded SAT solver on hard-to-solve Boolean satisfiability problem instances, internal solver runtime parameters have been collected and analyzed. A subset of these parameters has been selected and used as a feature vector to successfully create a machine learning model for the binary classification of the solver's termination behavior for any new solving run on a not-yet-solved instance. The model can be used to estimate early whether a solving attempt will succeed, or whether it is not a promising candidate and should be terminated quickly. In this context, the combination of active profiles of the runtime features appears to mirror the influence of the solver's momentary heuristics on the immediate quality of the solving process. Since the runtime parameters of the first two solving iterations already suffice to predict the termination of an attempt with a good success score, the results of the present work provide a promising basis that can be developed further in order to enrich SAT solvers for cryptographic applications, or modern SAT solvers in general, with AI capabilities.
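The core learning task described above is a binary classifier over early runtime features. The sketch below trains a from-scratch logistic regression on synthetic stand-in features for "first two iterations" of a solving run; the feature names, distributions, and hyperparameters are all illustrative assumptions, not real solver logs or the paper's model.

```python
import math
import random

def train_logreg(xs, ys, lr=0.1, epochs=200):
    """Plain SGD logistic regression: returns weights and bias."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            g = p - y                           # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = attempt predicted to terminate successfully, 0 = abort early."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0.0 else 0

random.seed(0)
# Hypothetical features per run: (conflicts/sec, learned-clause rate)
# aggregated over the first two solving iterations.
good = [[random.gauss(2.0, 0.3), random.gauss(1.5, 0.3)] for _ in range(50)]
bad = [[random.gauss(0.5, 0.3), random.gauss(0.4, 0.3)] for _ in range(50)]
w, b = train_logreg(good + bad, [1] * 50 + [0] * 50)
acc = sum(predict(w, b, x) == y
          for x, y in zip(good + bad, [1] * 50 + [0] * 50)) / 100
print(f"train accuracy: {acc:.2f}")
```

In deployment, a run classified as 0 after two iterations would be terminated to free threads for more promising attempts.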
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
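The optimization problem at the heart of this approach is a standard identity (not specific to this tutorial): the log marginal likelihood decomposes into the evidence lower bound (ELBO) plus a KL divergence,

```latex
\log p(x) \;=\;
\underbrace{\mathbb{E}_{q_\phi(z)}\!\left[\log p(x, z) - \log q_\phi(z)\right]}_{\mathrm{ELBO}(\phi)}
\;+\;
\mathrm{KL}\!\left(q_\phi(z)\,\middle\|\,p(z \mid x)\right).
```

Since the KL term is nonnegative, the ELBO lower-bounds $\log p(x)$; in the parametric perspective, one maximizes $\mathrm{ELBO}(\phi)$ over the parameters $\phi$ of a chosen variational family $q_\phi$, which simultaneously tightens the bound on the marginal likelihood and drives $q_\phi(z)$ toward the posterior $p(z \mid x)$.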
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which draws on the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KG and the methods employed. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers to more appropriately re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at both the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our framework can easily be extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be made available.
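The first insight above, mask-guided dynamic class centers used to re-weight query features, can be sketched with masked average pooling plus a similarity-based re-weighting. This is our own illustrative rendering of the general technique, not RefT's actual module; the shapes and the cosine re-weighting rule are assumptions.

```python
import numpy as np

def class_center(support_feats: np.ndarray, support_mask: np.ndarray) -> np.ndarray:
    """Masked average pooling: (H, W, C) features + (H, W) binary mask -> (C,)."""
    w = support_mask[..., None]
    return (support_feats * w).sum(axis=(0, 1)) / max(support_mask.sum(), 1)

def reweight(query_feats: np.ndarray, center: np.ndarray) -> np.ndarray:
    """Scale each query location by its cosine similarity to the class center,
    boosting locations that resemble the support class."""
    qn = np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8
    cn = np.linalg.norm(center) + 1e-8
    sim = (query_feats @ center)[..., None] / (qn * cn)   # in [-1, 1]
    return query_feats * (1.0 + sim)

rng = np.random.default_rng(0)
support = rng.normal(size=(8, 8, 16))                 # toy support feature map
mask = (rng.random((8, 8)) > 0.5).astype(float)       # toy support mask
query = rng.normal(size=(8, 8, 16))                   # toy query feature map
center = class_center(support, mask)
print(center.shape, reweight(query, center).shape)
```

The second, instance-level enhancement in the abstract would then pass the re-weighted query features through cross-attention against the support object queries.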
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
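One common way to realize image-level photometric alignment is a per-channel statistics transfer from source images to the target domain; the sketch below uses a simple mean/std transfer as an illustration. This is a generic stand-in, not necessarily the paper's global photometric alignment module.

```python
import numpy as np

def photometric_align(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Shift/scale each channel of `src` so its global mean and std
    match those of `tgt` (per-channel statistics transfer)."""
    out = src.astype(float).copy()
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std() + 1e-8
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return out

rng = np.random.default_rng(0)
src = rng.normal(0.3, 0.1, size=(32, 32, 3))   # toy "source domain" image
tgt = rng.normal(0.6, 0.2, size=(32, 32, 3))   # toy "target domain" image
aligned = photometric_align(src, tgt)
print(f"{aligned[..., 0].mean():.3f} {aligned[..., 0].std():.3f}")
```

The aligned source image keeps its spatial content (and hence its segmentation labels) while matching the target domain's global photometric statistics, so it can be used directly for supervised training.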